Parameter Learning in PRISM Programs with Continuous Random Variables
Authors
Abstract
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, De Raedt et al.'s ProbLog, and Vennekens et al.'s LPAD, combines statistical and logical knowledge representation and inference. Inference in these languages is based on the enumerative construction of proofs over logic programs; consequently, they permit only very limited use of random variables with continuous distributions. In this paper, we extend PRISM with Gaussian random variables and linear equality constraints, and consider the problem of parameter learning in the extended language. Many statistical models, such as finite mixture models and Kalman filters, can be encoded in extended PRISM. Our EM-based learning algorithm uses a symbolic inference procedure that represents sets of derivations without enumerating them. This lets us learn the distribution parameters of extended PRISM programs with discrete as well as Gaussian random variables. The learning algorithm naturally generalizes those used for PRISM and for hybrid Bayesian networks.
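The abstract notes that finite mixture models are among the models expressible in extended PRISM and that learning is EM-based. As a rough, paper-independent illustration of what EM parameter learning does for such a model, here is a minimal sketch of EM for a one-dimensional two-component Gaussian mixture; all names and initialization choices are illustrative assumptions, not the paper's symbolic algorithm:

```python
import numpy as np

def em_gmm(x, iters=50):
    """EM for a 1-D two-component Gaussian mixture (illustrative only)."""
    mu = np.quantile(x, [0.25, 0.75])        # spread-out deterministic init
    sigma = np.full(2, x.std())
    w = np.full(2, 0.5)                      # mixing weights
    for _ in range(iters):
        # E-step: soft responsibilities of each component for each point
        dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means, and variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

# toy data: two well-separated components
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-5, 1, 200), rng.normal(5, 1, 200)])
w, mu, sigma = em_gmm(x)
```

The paper's contribution is doing this kind of update symbolically over sets of derivations rather than over an explicit data matrix as above.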
Similar resources
Course on Artificial Intelligence and Intelligent Systems: Logical-Statistical models and parameter learning in the PRISM system
3 More on using PRISM illustrated by a Bayesian Network
3.1 Defining a Bayesian network in PRISM
3.2 Estimating probabilities for hidden random variables
3.3 Learning probabilities
3.4 A final remark on learning ...
Inference in probabilistic logic programs with continuous random variables
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, De Raedt et al.'s ProbLog, and Vennekens et al.'s LPAD, aims to combine statistical and logical knowledge representation and inference. However, the inference techniques used in these works rely on enumerating sets of explanations for a query answer. Consequently, these languages permit very limited use ...
Inference and Learning in Probabilistic Logic Programs with Continuous Random Variables
Abstract of the Dissertation: Inference and Learning in Probabilistic Logic Programs with Continuous Random Variables
Generative Modeling with Failure in PRISM
PRISM is a logic-based, Turing-complete symbolic-statistical modeling language with a built-in parameter-learning routine. In this paper, we enhance the modeling power of PRISM by allowing general PRISM programs to fail in the generation process of observable events. Introducing failure extends the class of definable distributions but requires a generalization of the semantics of PRISM programs. We p...
Viterbi training in PRISM
VT (Viterbi training), or hard EM, is an efficient way of learning parameters for probabilistic models with hidden variables. Given an observation y, it searches for a state of the hidden variables x that maximizes p(x, y | θ) by coordinate ascent on the parameters θ and x. In this paper we introduce VT to PRISM, a logic-based probabilistic modeling system for generative models. VT improves PRISM in ...
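As a rough illustration of the hard-EM idea this snippet describes (not the PRISM implementation), the following sketch replaces EM's soft responsibilities with a single most-likely assignment per data point, for a toy two-component Gaussian mixture; all names and choices are hypothetical:

```python
import numpy as np

def viterbi_train_gmm(x, iters=30):
    """Hard EM / Viterbi training for a 1-D two-component Gaussian mixture.
    Each point is assigned to its single most likely component (the hard
    analogue of EM's soft responsibilities), then the parameters are
    re-estimated from those assignments."""
    mu = np.quantile(x, [0.25, 0.75])        # spread-out deterministic init
    sigma = np.full(2, x.std())
    w = np.full(2, 0.5)                      # mixing weights
    for _ in range(iters):
        # per-point log-density under each component, plus log-weight
        logp = (np.log(w) - np.log(sigma) - 0.5 * np.log(2 * np.pi)
                - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
        z = logp.argmax(axis=1)              # hard assignment (the "Viterbi" step)
        for k in range(2):
            sel = x[z == k]
            w[k] = len(sel) / len(x)
            mu[k] = sel.mean()
            sigma[k] = sel.std()
    return w, mu, sigma

# toy data: two well-separated components
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(8, 1, 300)])
w, mu, sigma = viterbi_train_gmm(x)
```

The only change from soft EM is the `argmax` in place of normalized responsibilities, which is what makes each iteration cheaper.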
Journal: CoRR
Volume: abs/1203.4287
Issue: -
Pages: -
Publication date: 2012